
Modality-specific Cross-modal Similarity Measurement with Recurrent Attention Network


Abstract

Nowadays, cross-modal retrieval plays an indispensable role in flexibly finding information across different modalities of data. Effectively measuring the similarity between different modalities of data is the key to cross-modal retrieval. Different modalities such as image and text have imbalanced and complementary relationships, containing unequal amounts of information when describing the same semantics. For example, images often contain details that cannot be demonstrated by textual descriptions, and vice versa. Existing works based on Deep Neural Networks (DNN) mostly construct one common space for different modalities to find the latent alignments between them, which loses their exclusive modality-specific characteristics. Different from existing works, we propose a modality-specific cross-modal similarity measurement (MCSM) approach that constructs an independent semantic space for each modality and adopts an end-to-end framework to directly generate modality-specific cross-modal similarity without an explicit common representation. For each semantic space, modality-specific characteristics within one modality are fully exploited by a recurrent attention network, while the data of the other modality is projected into this space with attention-based joint embedding, so that the learned attention weights guide fine-grained cross-modal correlation learning and capture the imbalanced and complementary relationships between different modalities. Finally, the complementarity between the semantic spaces of different modalities is exploited by adaptive fusion of the modality-specific cross-modal similarities to perform cross-modal retrieval. Experiments on the widely used Wikipedia and Pascal Sentence datasets, as well as our constructed large-scale XMediaNet dataset, verify the effectiveness of the proposed approach, which outperforms 9 state-of-the-art methods.
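
To make the described architecture more concrete, the following is a minimal PyTorch-style sketch of one modality-specific semantic space (the image space): a recurrent attention module over image regions, a simple joint embedding of text into that space, and adaptive fusion of the two modality-specific similarities. All class names, dimensions, and the mean-pooled text projection are illustrative assumptions, not the authors' implementation; the symmetric text-specific space would be built the same way with the roles of the modalities swapped.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentAttention(nn.Module):
    """Attend over local features (image regions or words) step by step with a GRU cell."""
    def __init__(self, feat_dim, hid_dim, steps=3):
        super().__init__()
        self.steps = steps
        self.gru = nn.GRUCell(feat_dim, hid_dim)
        self.att = nn.Linear(hid_dim + feat_dim, 1)

    def forward(self, feats):                       # feats: (B, N, feat_dim)
        B, N, _ = feats.shape
        h = feats.new_zeros(B, self.gru.hidden_size)
        weights = None
        for _ in range(self.steps):
            # Score each local feature against the current hidden state.
            q = h.unsqueeze(1).expand(B, N, h.size(-1))
            scores = self.att(torch.cat([q, feats], dim=-1)).squeeze(-1)   # (B, N)
            weights = F.softmax(scores, dim=-1)
            context = torch.bmm(weights.unsqueeze(1), feats).squeeze(1)    # (B, feat_dim)
            h = self.gru(context, h)
        return h, weights                            # space embedding + attention weights


class ImageSemanticSpace(nn.Module):
    """Image-specific semantic space: text is embedded into it and compared directly."""
    def __init__(self, img_dim=2048, txt_dim=300, hid_dim=512):
        super().__init__()
        self.img_att = RecurrentAttention(img_dim, hid_dim)
        self.txt_proj = nn.Linear(txt_dim, hid_dim)

    def forward(self, img_regions, txt_words):       # (B, N, img_dim), (B, T, txt_dim)
        img_vec, att_w = self.img_att(img_regions)
        # Simplified joint embedding: mean-pool the words, then project into the image space.
        # The paper instead uses the learned attention weights to guide this projection.
        txt_vec = self.txt_proj(txt_words.mean(dim=1))
        return F.cosine_similarity(img_vec, txt_vec, dim=-1)  # modality-specific similarity


def fuse_similarities(sim_in_image_space, sim_in_text_space, alpha=0.5):
    """Adaptive fusion of the two modality-specific similarities (alpha learnable in practice)."""
    return alpha * sim_in_image_space + (1.0 - alpha) * sim_in_text_space
```

At retrieval time, each query-candidate pair would receive one similarity from the image-specific space and one from the text-specific space, and the fused score would be used for ranking.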
